You have an idea. Now you need to think like an AI engineer โ not just what it does, but how it works, what it needs, and what can go wrong.
What type of learning? What model? Why?
Training data sources, quality risks, bias mitigation
Who could be harmed? Where does it break?
What data comes in? Text, image, sensor, user action?
What ML type? What's it been trained on?
Prediction, classification, generated content, recommendation?
How does the system improve over time?
Who collected it? When? Does it represent your actual users, or just some of them?
Historical bias (reflects old injustices), sampling bias (certain groups missing), label bias (humans who labeled it had their own views).
Diverse data collection, fairness metrics, human oversight, regular audits, transparency about limitations.
Every AI system affects people. Your pitch must show you've thought this through.
If your system makes a wrong prediction โ who suffers? Is that harm minor (wrong movie rec) or serious (wrong medical diagnosis)?
Does the user know their data is being used? Is there a way to opt out? What happens to their data?
Could a bad actor exploit your system? Could it be used for surveillance, discrimination, or manipulation?
For high-stakes decisions (medical, legal, financial), should a human always have final say? How is that built in?
Many ideas will involve a chatbot, content generator, or language model. Teams using LLMs must address these specifically:
LLMs can confidently state false information. How does your system handle or flag this? What's the risk if a user trusts it?
Malicious users can try to trick your model with crafted inputs. Have you thought about how users might abuse your system?
Most LLMs are trained predominantly on English, Western data. How does this affect your use case, especially if aimed at Lebanon or Arabic speakers?
Training and running large language models consumes significant energy. Is the benefit worth the cost? How might you minimise it?
Two teams volunteer (or are chosen). They explain their system design. The class asks hard questions.
Start with a story, statistic, or scenario. Make the audience feel the problem before you introduce your solution.
What your system does + the ML type and process. Include a diagram, mockup, or user flow.
Training data sources, known biases, and your mitigation approach. Show you've thought critically.
Who could be harmed, failure cases, and what your system should never be used for.
What you'd improve, expand, or research next. Shows ambition and depth of thinking.
Completed during today's class. All 5 design questions answered in detail.
6 slides. Rough is fine. Must cover all 6 required pitch elements. Bring to Week 3 for practice.
Week 3 is entirely dedicated to pitch rehearsal and feedback. Come prepared to present your full deck.